An unconventional robust integrator for dynamical low-rank approximation

Authors

Gianluca Ceruti, Christian Lubich

Abstract

We propose and analyse a numerical integrator that computes a low-rank approximation to large time-dependent matrices that are either given explicitly via their increments or are the unknown solution of a matrix differential equation. Furthermore, the integrator is extended to the approximation of time-dependent tensors by Tucker tensors of fixed multilinear rank. The proposed low-rank integrator is different from the known projector-splitting integrator for dynamical low-rank approximation, but it retains the important robustness to small singular values that has so far been known only for the projector-splitting integrator. The new integrator also offers some potential advantages over the projector-splitting integrator: it avoids the backward-in-time integration substep of the projector-splitting integrator, which is potentially unstable for dissipative problems; it allows for more parallelism; and it preserves symmetry or anti-symmetry of the matrix or tensor when the differential equation does. Numerical experiments illustrate the behaviour of the new integrator.
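In the matrix case, one time step of an integrator with this structure can be sketched as follows: the left and right bases are updated by two independent substeps (which may run in parallel), and a small Galerkin system for the core factor is then integrated forward in time, so that no backward-in-time substep occurs. The sketch below is a minimal illustration only, assuming a rank-r factorization Y ≈ U S V^T, a user-supplied right-hand side F(t, Y) for Y' = F(t, Y), and a single explicit Euler step for each subproblem; the substep discretization and all names are illustrative assumptions, not the authors' implementation.

import numpy as np

def low_rank_step(U0, S0, V0, F, t0, h):
    """One step of a basis-update-then-Galerkin low-rank integrator (sketch).

    Y0 = U0 @ S0 @ V0.T is the rank-r approximation at time t0, and
    F(t, Y) returns the right-hand side of Y' = F(t, Y). Each small
    subproblem is advanced by a single explicit Euler step here; any
    ODE solver could be substituted.
    """
    Y0 = U0 @ S0 @ V0.T

    # K-step: update the left basis (independent of the L-step, so both
    # can be carried out in parallel).
    K1 = U0 @ S0 + h * F(t0, Y0) @ V0
    U1, _ = np.linalg.qr(K1)
    M = U1.T @ U0

    # L-step: update the right basis.
    L1 = V0 @ S0.T + h * F(t0, Y0).T @ U0
    V1, _ = np.linalg.qr(L1)
    N = V1.T @ V0

    # S-step: Galerkin step for the small core in the new bases,
    # integrated forward in time (no backward substep).
    S_hat = M @ S0 @ N.T
    S1 = S_hat + h * U1.T @ F(t0, U1 @ S_hat @ V1.T) @ V1
    return U1, S1, V1

For a matrix that is given explicitly via its increments rather than by a differential equation, F(t, Y) would simply return the increment of A(t) per unit time, independently of Y.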


Similar articles

Dynamical low-rank approximation

In low-rank approximation, separation of variables is used to reduce the amount of data in computations with high-dimensional functions. Such techniques have proved their value, e.g., in quantum mechanics and recommendation algorithms. It is also possible to fold a low-dimensional grid into a high-dimensional object, and use low-rank techniques to compress the data. Here, we consider low-rank t...
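As a concrete illustration of the folding idea, a smooth function sampled on a one-dimensional grid of 2^d points can be reshaped into a d-dimensional 2 × 2 × ... × 2 array whose matrix unfoldings have very low numerical rank, so the data compresses well. The snippet below is a generic sketch of this quantization-style compression; the example function, grid size and tolerance are arbitrary choices and are not taken from the work cited above.

import numpy as np

d = 16                                   # 2**16 = 65536 grid points
x = np.linspace(0.0, 1.0, 2**d)
samples = np.exp(-x)                     # smooth function sampled on the grid

# Fold the 1-D grid into a 2 x 2 x ... x 2 array (d binary indices) and
# inspect one matrix unfolding: rows are indexed by the first d//2 bits,
# columns by the remaining bits.
folded = samples.reshape([2] * d)
unfolding = folded.reshape(2**(d // 2), 2**(d // 2))

# A truncated SVD shows that very few terms are needed to represent the data.
s = np.linalg.svd(unfolding, compute_uv=False)
rank = int(np.sum(s > 1e-12 * s[0]))
print(f"numerical rank {rank} instead of full rank {2**(d // 2)}")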


Dynamical Low-Rank Approximation

For the low rank approximation of time-dependent data matrices and of solutions to matrix differential equations, an increment-based computational approach is proposed and analyzed. In this method, the derivative is projected onto the tangent space of the manifold of rank-r matrices at the current approximation. With an appropriate decomposition of rank-r matrices and their tangent matrices, th...
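Written out for the factorization Y(t) = U(t) S(t) V(t)^T, with U and V having orthonormal columns and S invertible, the tangent-space projection of the derivative Ȧ leads to coupled differential equations for the three factors. The display below is the commonly quoted form of this system, stated here as a sketch because the abstract above is truncated:

\[
\dot U = (I - U U^{\mathsf T})\,\dot A\,V S^{-1}, \qquad
\dot S = U^{\mathsf T} \dot A\, V, \qquad
\dot V = (I - V V^{\mathsf T})\,\dot A^{\mathsf T} U S^{-\mathsf T},
\]

so that \(\dot Y = \dot U S V^{\mathsf T} + U \dot S V^{\mathsf T} + U S \dot V^{\mathsf T}\) coincides with the orthogonal projection of \(\dot A\) onto the tangent space of the rank-r manifold at Y. The factors S^{-1} and S^{-T} are the source of the difficulty with small singular values mentioned in the first abstract.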


Robust Weighted Low-Rank Matrix Approximation

The calculation of a low-rank approximation to a matrix is fundamental to many algorithms in computer vision and other fields. One of the primary tools used for calculating such low-rank approximations is the Singular Value Decomposition, but this method is not applicable in the case where there are outliers or missing elements in the data. Unfortunately this is often the case in practice. We p...
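A standard way to cope with missing or unreliable entries, which a plain SVD cannot handle, is to minimise a weighted error such as Σ_ij W_ij (A_ij − (U V^T)_ij)^2 by alternating least squares over the two factors. The snippet below is a generic sketch of that weighted alternating scheme, with W acting as a 0/1 mask for missing data; it is not the specific robust method proposed in the paper summarised above.

import numpy as np

def weighted_low_rank(A, W, r, iters=50, reg=1e-8):
    """Fit a rank-r factorization A ~ U @ V.T using only entries where W > 0.

    Alternating least squares: each row of U, then each row of V, solves a
    small weighted least-squares problem. W is an m x n array of nonnegative
    weights (e.g. 1 for observed entries, 0 for missing ones).
    """
    m, n = A.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    for _ in range(iters):
        for i in range(m):                       # update row i of U
            G = (V * W[i][:, None]).T @ V + reg * np.eye(r)
            b = (V * W[i][:, None]).T @ A[i]
            U[i] = np.linalg.solve(G, b)
        for j in range(n):                       # update row j of V
            G = (U * W[:, j][:, None]).T @ U + reg * np.eye(r)
            b = (U * W[:, j][:, None]).T @ A[:, j]
            V[j] = np.linalg.solve(G, b)
    return U, V

Replacing the squared error by a robust loss, or re-weighting W iteratively, is one route towards the outlier resistance discussed above.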


Approximation Algorithms for l0-Low Rank Approximation

For any column A:,i the best response vector is 1, so ‖A:,i 1^T − A‖0 = 2(n − 1) = 2(1 − 1/n)·OPT_F, with OPT_F = n. Boolean ℓ0-rank-1. Theorem 3 (Sublinear). Given A ∈ {0,1}^{m×n} with column adjacency arrays and with row and column sums, we can compute w.h.p. in time O(min{‖A‖0 + m + n, ψ_B^{-1}(m + n) log(mn)}) vectors u, v such that ‖A − uv^T‖0 ≤ (1 + O(ψ_B))·OPT_B. Theorem 4 (Exact). Given A ∈ {0,1}^{m×n} with OPT_B/‖A‖0...


Approximation Algorithms for $\ell_0$-Low Rank Approximation

We study the ℓ0-Low Rank Approximation Problem, where the goal is, given an m×n matrix A, to output a rank-k matrix A′ for which ‖A′ − A‖0 is minimized. Here, for a matrix B, ‖B‖0 denotes the number of its non-zero entries. This NP-hard variant of low rank approximation is natural for problems with no underlying metric, and its goal is to minimize the number of disagreeing data positions. We provid...
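To make the objective concrete, the snippet below counts the disagreeing positions ‖A′ − A‖0 between a 0/1 data matrix A and a rank-1 Boolean candidate A′ = u v^T. The candidate is produced by a crude thresholding heuristic purely so that there is something to evaluate; it is not one of the approximation algorithms from the papers summarised above.

import numpy as np

def l0_error(A, A_prime):
    """Number of positions where the two matrices disagree, i.e. ||A' - A||_0."""
    return int(np.count_nonzero(A_prime != A))

rng = np.random.default_rng(1)
A = (rng.random((60, 40)) < 0.3).astype(int)    # random 0/1 data matrix

# Rank-1 Boolean candidate A' = u v^T, chosen by keeping the rows and
# columns that contain many ones (a heuristic for illustration only).
u = (A.mean(axis=1) > 0.3).astype(int)
v = (A.mean(axis=0) > 0.3).astype(int)
A_prime = np.outer(u, v)

print("disagreeing entries ||A' - A||_0 =", l0_error(A, A_prime))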



Journal

Journal: BIT Numerical Mathematics

Year: 2021

ISSN: 0006-3835, 1572-9125

DOI: https://doi.org/10.1007/s10543-021-00873-0